
Updates to onprem OCP deployment #572

Draft · wants to merge 1 commit into base: main
Conversation

radez
Collaborator

@radez radez commented Nov 19, 2024


openshift-ci bot commented Nov 20, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: radez

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@radez radez changed the title from "Specify Podman as the deploy type for the bastion AI container" to "Updates to onprem OCP deployment" on Nov 20, 2024
- Specify Podman as the deploy type for the bastion AI container
example podman configmap: https://github.com/openshift/assisted-service/blob/master/deploy/podman/configmap.yml

- configure cluster networking before workers are booted
@@ -12,6 +12,7 @@ ASSISTED_SERVICE_HOST={{ assisted_installer_host }}:{{ assisted_installer_port }
IMAGE_SERVICE_BASE_URL=http://{{ assisted_installer_host }}:{{ assisted_image_service_port }}
LISTEN_PORT={{ assisted_image_service_port }}
DEPLOY_TARGET=onprem
DEPLOY_TYPE="Podman"
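For reference, the linked example configmap carries the same settings in ConfigMap form. The excerpt below is an illustrative sketch mirroring the environment file above — the key set shown is an assumption based on this diff, not a copy of the upstream file:

```yaml
# Hypothetical excerpt of a podman-deployment configmap for
# assisted-service, mirroring the environment file in this diff.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  DEPLOY_TARGET: onprem   # on-prem deployment rather than the SaaS/Kubernetes target
  DEPLOY_TYPE: Podman     # the flag this PR adds: service is hosted by podman
```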
Member

Curious as to the significance of adding this, since we have not explicitly set it before. Does it change assisted-service behavior in any way?

Collaborator Author

Not sure exactly. I'm having trouble getting a large deployment to happen, and the AI folks suggested that since we're using Podman we should include this flag.

I seem to have had better stability since, though I don't know how much of that to attribute to the flag.

Comment on lines -58 to -95
- name: Patch cluster network settings
uri:
url: "http://{{ assisted_installer_host }}:{{ assisted_installer_port }}/api/assisted-install/v2/clusters/{{ ai_cluster_id }}"
method: PATCH
status_code: [201]
return_content: true
body_format: json
body: {
"cluster_networks": [
{
"cidr": "{{ cluster_network_cidr }}",
"cluster_id": "{{ ai_cluster_id }}",
"host_prefix": "{{ cluster_network_host_prefix }}"
}
],
"service_networks": [
{
"cidr": "{{ service_network_cidr }}",
"cluster_id": "{{ ai_cluster_id }}",
}
]
}

- name: Patch cluster ingress/api vip addresses
uri:
url: "http://{{ assisted_installer_host }}:{{ assisted_installer_port }}/api/assisted-install/v2/clusters/{{ ai_cluster_id }}"
method: PATCH
status_code: [201]
return_content: true
body_format: json
body: {
"cluster_network_host_prefix": "{{ cluster_network_host_prefix }}",
"vip_dhcp_allocation": "{{ vip_dhcp_allocation }}",
"ingress_vips": [{"ip": "{{ controlplane_network_ingress }}"}],
"api_vips": [{"ip": "{{ controlplane_network_api }}"}],
"network_type": "{{ networktype }}"
}

Member

What was the context for removing this? I tried going back through the history to see when it was first moved or added here, but it seems it was always this way. I vaguely recall that it was required to be set here after machines were discovered, but it could just be unnecessary.

Collaborator Author

It all got moved into ansible/roles/create-ai-cluster/tasks/main.yml: the VIPs are added to the initial configuration, and the network patch runs just after cluster config.

I'm not sure we still need the network patch. I haven't tested without it yet, but all the info in that patch is already in the initial cluster configuration.

Doing these updates triggers a network change that makes all the workers revalidate networking. With a large number of workers that's a big hit on the network and on the time to wait for the cluster to become ready. Configuring them as part of the initial cluster setup avoids that revalidation.
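A minimal sketch of folding the network settings into the initial cluster registration, assuming the v2 clusters endpoint accepts these fields at creation time; the task name and the `cluster_name` variable are illustrative, not taken from the actual create-ai-cluster role:

```yaml
# Illustrative only: supplying networks and VIPs at cluster creation
# so no later PATCH (and worker network revalidation) is needed.
- name: Create cluster with networking configured up front
  uri:
    url: "http://{{ assisted_installer_host }}:{{ assisted_installer_port }}/api/assisted-install/v2/clusters"
    method: POST
    status_code: [201]
    return_content: true
    body_format: json
    body:
      name: "{{ cluster_name }}"   # assumed variable
      cluster_networks:
        - cidr: "{{ cluster_network_cidr }}"
          host_prefix: "{{ cluster_network_host_prefix }}"
      service_networks:
        - cidr: "{{ service_network_cidr }}"
      ingress_vips:
        - ip: "{{ controlplane_network_ingress }}"
      api_vips:
        - ip: "{{ controlplane_network_api }}"
      network_type: "{{ networktype }}"
```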

Member

@akrzos akrzos Nov 22, 2024

Interesting, I was not aware that the network updates could trigger actions on the discovered nodes. Thanks for sharing, let us know of the status once you test with a real deployment.
